Generalize fused weight split #57
Conversation
Signed-off-by: Hao Wu <[email protected]>
Force-pushed from 11e032a to c178725
Feels like it may be heading in the wrong direction; there will be more cases to support. An alternative is to just take an orthogonalize function and let users control how to split inside it, as well as the scale function and all the rest. @FDecaYed @mkhona-nvidia let me know what you think.
/ok to test c178725
FDecaYed
left a comment
LGTM
split_grads_whitened = [self.orthogonalize_fn(g) for g in split_grads]
split_grad_scales = [self.scale_factor_fn(g.size(0), g.size(1)) for g in split_grads]
# TODO(skyw): Revisit whether there are cases that concatenating is not done along dim=0.
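The snippet above splits a fused gradient, whitens each piece, and computes a per-piece scale. A minimal NumPy sketch of the same idea (the `split_sizes` value, the SVD-based stand-in for the project's actual `orthogonalize_fn`, and the Muon-style scale heuristic are all assumptions, not the PR's real implementation):

```python
import numpy as np

def orthogonalize_fn(g):
    # Stand-in whitening: project g onto the nearest
    # (semi-)orthogonal matrix via its SVD.
    u, _, vt = np.linalg.svd(g, full_matrices=False)
    return u @ vt

def scale_factor_fn(rows, cols):
    # A common Muon-style shape-based scale heuristic; assumed here.
    return max(1.0, rows / cols) ** 0.5

# Fused gradient: e.g. Q, K, V stacked along dim 0.
fused_grad = np.random.randn(12, 4)
split_sizes = [4, 4, 4]

split_grads = np.split(fused_grad, np.cumsum(split_sizes)[:-1], axis=0)
split_grads_whitened = [orthogonalize_fn(g) for g in split_grads]
split_grad_scales = [scale_factor_fn(g.shape[0], g.shape[1]) for g in split_grads]

# Re-fuse along dim 0 (the TODO above notes this assumption).
update = np.concatenate(
    [s * g for s, g in zip(split_grad_scales, split_grads_whitened)], axis=0
)
```
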
nn.Conv1d (https://docs.pytorch.org/docs/stable/generated/torch.nn.Conv1d.html) has a 3D filter, so the output has to be reshaped to 3D.
Valid point; that's also one more reason to let the user supply the orthogonalize function altogether instead of trying to generalize for everything.
Although conv specifically is a completely different case: all the rest of the code assumes 2D, the scale function for example.
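Since nn.Conv1d weight is 3D `(out_channels, in_channels, kernel_size)`, a 2D orthogonalizer would need the filter flattened first and the result reshaped back. A hedged NumPy sketch of that round trip (the flattening convention, collapsing everything but the output-channel dimension, is an assumption; the SVD whitening stands in for the real orthogonalize function):

```python
import numpy as np

def orthogonalize_2d(g):
    # Stand-in 2D whitening via SVD.
    u, _, vt = np.linalg.svd(g, full_matrices=False)
    return u @ vt

def orthogonalize_conv1d(w):
    # w: (out_channels, in_channels, kernel_size), as in nn.Conv1d.
    out_ch = w.shape[0]
    flat = w.reshape(out_ch, -1)                     # collapse to 2D
    return orthogonalize_2d(flat).reshape(w.shape)   # restore 3D shape

w = np.random.randn(8, 3, 5)
w_orth = orthogonalize_conv1d(w)
```
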
Signed-off-by: Hao Wu <[email protected]>
/ok to test 27aa01c
Signed-off-by: Hao Wu <[email protected]>
/ok to test bd33fbb
mkhona-nvidia
left a comment
LGTM
Decided to remove the fused-parameter splitting logic altogether: how parameters are fused/stacked together is implementation dependent, so it is hard to generalize for everything.
Instead, the code now provides an interface to plug in a more sophisticated orthogonalize function and lets the user control how to split.
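A minimal sketch of what such a pluggable interface might look like (the helper names `make_split_orthogonalize` and `default_orthogonalize`, and the SVD-based whitening, are illustrative assumptions, not the PR's actual API): the user builds an orthogonalize function that encapsulates the splitting, and the optimizer only ever calls the one function.

```python
import numpy as np

def default_orthogonalize(g):
    # Stand-in whitening via SVD for a single 2D matrix.
    u, _, vt = np.linalg.svd(g, full_matrices=False)
    return u @ vt

def make_split_orthogonalize(split_sizes, inner=default_orthogonalize):
    # Returns an orthogonalize_fn that splits a fused 2D gradient
    # along dim 0, whitens each piece, and re-fuses the result.
    def orthogonalize_fn(g):
        pieces = np.split(g, np.cumsum(split_sizes)[:-1], axis=0)
        return np.concatenate([inner(p) for p in pieces], axis=0)
    return orthogonalize_fn

# User controls the split: e.g. a fused QKV weight with three 4-row blocks.
ortho = make_split_orthogonalize([4, 4, 4])
update = ortho(np.random.randn(12, 4))
```

This keeps the optimizer core agnostic to fusion layout: a user with a differently stacked parameter (or a conv filter needing a reshape) just supplies a different `orthogonalize_fn`.
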